
    China Has Reached the Lewis Turning Point

    In the past several years, labor shortages in China have become an issue. However, there is heated debate over whether China has passed the Lewis turning point and moved from a period of unlimited labor supply to a new era of labor shortage. Most empirical studies on this topic focus on estimating total labor supply and demand, yet the poor quality of China's labor statistics leaves the debate open. In this paper, China's position along the Lewis continuum is examined through primary surveys of wage rates, which offer a more reliable statistic than employment data. Our results show a clear rising trend in real wage rates since 2003. The acceleration of real wages even in slack seasons indicates that the era of surplus labor is over. This finding has important policy implications for China's future development.
    Keywords: dual economy, employment data, labor market, Lewis model, supply and demand, surplus labor, wage rates
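    The paper's central statistic is a trend in survey-measured real wages. A minimal Python sketch of that computation, with all numbers invented for illustration (they are not the paper's survey data): deflate nominal wages by a price index, then fit a log-linear trend whose slope approximates the annual real growth rate.

        import numpy as np

        # Hypothetical survey data: year, nominal daily wage (yuan), CPI (2003 = 100).
        years = np.array([2003, 2004, 2005, 2006, 2007, 2008])
        nominal_wage = np.array([20.0, 22.0, 25.0, 29.0, 35.0, 42.0])
        cpi = np.array([100.0, 103.9, 105.8, 107.4, 112.5, 119.1])

        # Deflate to real wages, then fit log(real wage) = a + b * year;
        # the slope b approximates the annual real growth rate.
        real_wage = nominal_wage / (cpi / 100.0)
        b, a = np.polyfit(years, np.log(real_wage), 1)
        print(f"estimated annual real wage growth: {b:.1%}")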

    The Competitive Saving Motive: Evidence from Rising Sex Ratios and Savings Rates in China

    The high and rising household savings rate in China is not easily reconciled with the traditional explanations that emphasize life cycle factors, the precautionary saving motive, financial development, or habit formation. This paper proposes a new competitive saving motive: As the sex ratio rises, Chinese parents with a son raise their savings in a competitive manner in order to improve their son's relative attractiveness for marriage. The pressure on savings spills over to other households. Both cross-regional and household-level evidence supports this hypothesis. This factor can potentially account for about half of the actual increase in the household savings rate during 1990-2007.
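    The paper's cross-regional evidence amounts to regressing household savings rates on local sex ratios. A minimal Python sketch under that reading, with both arrays invented for illustration (they are not the paper's data):

        import numpy as np

        # Hypothetical regional data: sex ratio (males per 100 females in the
        # pre-marital cohort) and household savings rate.
        sex_ratio = np.array([104.0, 107.0, 110.0, 113.0, 117.0, 120.0])
        savings_rate = np.array([0.18, 0.20, 0.23, 0.24, 0.28, 0.30])

        # OLS with an intercept: savings_rate = b0 + b1 * sex_ratio + error.
        X = np.column_stack([np.ones_like(sex_ratio), sex_ratio])
        beta, *_ = np.linalg.lstsq(X, savings_rate, rcond=None)
        print(f"one-point rise in sex ratio ~ {beta[1]:+.4f} change in savings rate")

    The paper's actual specification adds controls and household-level data; this sketch only shows the shape of the cross-regional test.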

    Metric Learning for Projections Bias of Generalized Zero-shot Learning

    Generalized zero-shot learning (GZSL) models aim to recognize samples from both seen and unseen classes using only seen-class samples as training data. During inference, GZSL methods are often biased towards seen classes because only seen-class samples are visible during training. Most current GZSL methods try to learn an accurate projection function (from visual space to semantic space) to avoid this bias. However, since the learned projection may still be biased, the choice of distance used to assign a projected sample to its nearest class becomes important at inference time. In this work, we learn a parameterized Mahalanobis distance within the framework of VAEGAN (Variational Autoencoder \& Generative Adversarial Networks), where the weight matrix depends on the network's output. In particular, we improve the network structure of VAEGAN, using two discriminative branches to separately handle seen samples and the unseen samples generated from them, and we propose a new two-branch loss function that helps learn the optimized Mahalanobis distance representation. Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our method over state-of-the-art counterparts. Our code is available at https://anonymous.4open.science/r/111hxr.
    Comment: 9 pages, 2 figures
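    A minimal PyTorch sketch of the idea as we read it from the abstract (the class and attribute names here are our own; this is not the authors' released code): a Mahalanobis distance whose weight matrix is produced by a small network from the input feature, parameterized as M = L L^T so the metric stays positive semi-definite.

        import torch
        import torch.nn as nn

        class AdaptiveMahalanobis(nn.Module):
            def __init__(self, feat_dim: int, hidden: int = 128):
                super().__init__()
                # Maps a feature vector to the entries of a matrix L; using
                # M = L L^T keeps the learned metric positive semi-definite.
                self.weight_net = nn.Sequential(
                    nn.Linear(feat_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, feat_dim * feat_dim),
                )
                self.feat_dim = feat_dim

            def forward(self, x: torch.Tensor, prototype: torch.Tensor) -> torch.Tensor:
                # x, prototype: (batch, feat_dim); returns squared distances (batch,).
                L = self.weight_net(x).view(-1, self.feat_dim, self.feat_dim)
                M = L @ L.transpose(1, 2)             # input-dependent PSD matrix
                diff = (x - prototype).unsqueeze(-1)  # (batch, feat_dim, 1)
                return (diff.transpose(1, 2) @ M @ diff).squeeze(-1).squeeze(-1)

    Classification then assigns each sample to the class prototype with the smallest learned distance, rather than relying on a fixed Euclidean metric over a possibly biased projection.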

    A Simple and Effective Baseline for Attentional Generative Adversarial Networks

    Synthesising high-quality images from text descriptions with a generative model is an innovative and challenging task. In recent years, several improvements to GANs have been proposed for this task: AttnGAN, which uses an attention mechanism to guide GAN training; SD-GAN, which adopts a self-distillation technique to improve the generator and the quality of the generated images; and StackGAN++, which gradually refines image detail and quality by stacking multiple generators and discriminators. However, each of these designs carries some redundancy, which affects both generation performance and model complexity. Following the popular "simple and effective" idea, we (1) remove redundant structure and improve the backbone network of AttnGAN, and (2) integrate and reconstruct the multiple losses of DAMSM. Our improvements significantly reduce the model size and improve training efficiency while keeping performance unchanged, yielding our \textbf{SEAttnGAN}. Code is available at https://github.com/jmyissb/SEAttnGAN.
    Comment: 12 pages, 3 figures
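    For context, a minimal sketch of the word-level attention that AttnGAN-style generators condition on (the function name and tensor layout are illustrative assumptions, not the SEAttnGAN release): each image sub-region attends over the word embeddings and receives a word-context feature.

        import torch
        import torch.nn.functional as F

        def word_attention(regions: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
            # regions: (batch, n_regions, d)  image region features
            # words:   (batch, n_words, d)    word embeddings projected to dim d
            scores = regions @ words.transpose(1, 2)  # (batch, n_regions, n_words)
            attn = F.softmax(scores, dim=-1)          # weights over the words
            return attn @ words                       # (batch, n_regions, d)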